3 research outputs found
A Survey on Actionable Knowledge
Actionable Knowledge Discovery (AKD) is a crucial aspect of data mining that
is gaining popularity across a wide range of domains, because it can extract
valuable insights, or knowledge, from large datasets. AKD is the process of
identifying and extracting actionable insights from data that can be used to
make informed decisions and improve business outcomes, and it is a powerful
tool for uncovering patterns and trends with applications such as customer
relationship management, marketing, and fraud detection. This paper examines
research studies that pursue different objectives in domains such as
healthcare, finance, and telecommunications, and reviews the techniques and
approaches they use for AKD in detail. The paper provides a thorough analysis
of the current state of AKD, evaluates the advantages and disadvantages of
each method, and discusses novel solutions presented in the field. Overall,
this paper aims to provide a comprehensive overview of the methods and
techniques used in AKD and the impact they have on different domains.
Unmasking the giant: A comprehensive evaluation of ChatGPT's proficiency in coding algorithms and data structures
The transformative influence of Large Language Models (LLMs) is profoundly
reshaping the Artificial Intelligence (AI) technology domain. Notably, ChatGPT
distinguishes itself within these models, demonstrating remarkable performance
in multi-turn conversations and exhibiting code proficiency across an array of
languages. In this paper, we carry out a comprehensive evaluation of ChatGPT's
coding capabilities based on what is to date the largest catalog of coding
challenges. Our focus is on the python programming language and problems
centered on data structures and algorithms, two topics at the very foundations
of Computer Science. We evaluate ChatGPT for its ability to generate correct
solutions to the problems fed to it, its code quality, and nature of run-time
errors thrown by its code. Where ChatGPT code successfully executes, but fails
to solve the problem at hand, we look into patterns in the test cases passed in
order to gain some insights into how wrong ChatGPT code is in these kinds of
situations. To infer whether ChatGPT might have directly memorized some of the
data that was used to train it, we methodically design an experiment to
investigate this phenomena. Making comparisons with human performance whenever
feasible, we investigate all the above questions from the context of both its
underlying learning models (GPT-3.5 and GPT-4), on a vast array sub-topics
within the main topics, and on problems having varying degrees of difficulty
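The evaluation described in this abstract amounts to running generated code against a problem's test cases, counting passes, and treating run-time errors as failures. A minimal sketch of such a grading harness is below; the problem, candidate solution, and test cases (`two_sum`, `TEST_CASES`) are hypothetical illustrations, not the paper's actual benchmark.

```python
# Minimal sketch of a grading harness: run a candidate solution against a
# problem's test cases, recording pass/fail per case. Run-time errors are
# caught and counted as failures, mirroring the failure modes the abstract
# distinguishes. All names here are illustrative assumptions.

def two_sum(nums, target):
    """A hypothetical candidate solution (e.g. as generated by an LLM)."""
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

# Each test case pairs the arguments with the expected result.
TEST_CASES = [
    (([2, 7, 11, 15], 9), [0, 1]),
    (([3, 2, 4], 6), [1, 2]),
    (([3, 3], 6), [0, 1]),
]

def grade(solution, cases):
    """Return (passed, total, per-case booleans) for a candidate solution."""
    results = []
    for args, expected in cases:
        try:
            results.append(solution(*args) == expected)
        except Exception:  # a run-time error counts as a failed case
            results.append(False)
    return sum(results), len(results), results

passed, total, detail = grade(two_sum, TEST_CASES)
print(f"{passed}/{total} test cases passed")
```

The per-case boolean vector returned by `grade` is what would support the abstract's analysis of *patterns* in which test cases pass when a solution is only partially correct.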